Scalable Mixed-Integer Optimization with Neural Constraints via Dual Decomposition


Embedding deep neural networks (NNs) into mixed-integer programs (MIPs) is attractive for decision making with learned constraints, yet state-of-the-art "monolithic" linearisations blow up in size and quickly become intractable. In this paper, we introduce a novel dual-decomposition framework that relaxes the single coupling equality u = x with an augmented Lagrangian and splits the problem into a vanilla MIP and a constrained NN block. Each part is tackled by the solver that suits it best---branch & cut for the MIP subproblem, first-order optimisation for the NN subproblem---so the model remains modular, the number of integer variables never grows with network depth, and the per-iteration cost scales only linearly with the NN size. On the public SURROGATELIB benchmark, our method proves scalable, modular, and adaptable: it runs 120x faster than an exact Big-M formulation on the largest test case; the NN sub-solver can be swapped from a log-barrier interior-point step to a projected-gradient routine with no code changes; and replacing the MLP with an LSTM backbone still completes the full optimisation in 47s without any bespoke adaptation.
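As a rough illustration of the alternating structure the abstract describes, the sketch below (not the authors' code: the quadratic "NN" penalty, the enumerated integer grid standing in for branch & cut, and all constants are invented for this example) relaxes the coupling u = x with an augmented Lagrangian and alternates between an integer subproblem, a gradient-based block, and a dual update on the multiplier:

```python
# Toy sketch of augmented-Lagrangian dual decomposition for
#   min_{x,u}  0.5*x + g(u)   s.t.  u = x,  x integer,
# where g(u) = (u - 2.3)^2 stands in for a learned (NN) constraint block.

def solve(rho=1.0, iters=50, lr=0.05):
    g_grad = lambda u: 2.0 * (u - 2.3)   # gradient of the "NN" penalty
    lam, u = 0.0, 0.0                    # multiplier of u = x, and NN-block variable
    for _ in range(iters):
        # "MIP" subproblem: minimise 0.5*x + lam*(u - x) + rho/2 * (u - x)^2
        # over a small integer grid (a stand-in for branch & cut).
        x = min(range(-10, 11),
                key=lambda z: 0.5 * z + lam * (u - z) + 0.5 * rho * (u - z) ** 2)
        # "NN" subproblem: gradient steps on g(u) + lam*u + rho/2 * (u - x)^2.
        for _ in range(100):
            u -= lr * (g_grad(u) + lam + rho * (u - x))
        # Dual ascent on the relaxed coupling constraint u = x.
        lam += rho * (u - x)
    return x, u, lam

x, u, lam = solve()
print(x, round(u, 3), round(lam, 3))  # the iterates settle with u ≈ x
```

In the paper's setting the integer subproblem would instead go to a real branch-and-cut MIP solver and the second block to a first-order method over the network, but the alternation and the multiplier update follow the same pattern, and no integer variables are ever introduced for the network itself.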

Shuli Zeng, Sijia Zhang, Feng Wu, Shaojie Tang, Xiangyang Li. Scalable Mixed-Integer Optimization with Neural Constraints via Dual Decomposition. In Proceedings of the 40th Annual AAAI Conference on Artificial Intelligence (AAAI), pages 1-9, Singapore, January 2026.
@inproceedings{ZZWTLaaai26,
 address = {Singapore},
 author = {Shuli Zeng and Sijia Zhang and Feng Wu and Shaojie Tang and Xiangyang Li},
 booktitle = {Proceedings of the 40th Annual AAAI Conference on Artificial Intelligence (AAAI)},
 month = {January},
 pages = {1-9},
 title = {Scalable Mixed-Integer Optimization with Neural Constraints via Dual Decomposition},
 year = {2026}
}